In order to fault-simulate a design, the designer must provide the circuit netlist and the test pattern to be applied to the circuit's inputs. The test pattern is used to simulate the circuit once, yielding the expected values at the outputs of the circuit (i.e., the ``good'' machine). This step is the same as a conventional simulation. After the fault-free circuit has been simulated, the simulator proceeds to inject into the circuit one fault at a time. For each such fault, the circuit is resimulated in order to determine whether the fault is observable at the circuit's outputs. To do this, the designer must also tell the fault simulator which nodes correspond to output pins and when they are to be sampled. For example, output pins whose data is valid on $\phi_1$ should be sampled only on the falling edge of $\phi_1$.
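This simulate-once, then inject-and-resimulate loop can be sketched as follows. The netlist encoding, gate set, and simulate() helper are illustrative assumptions for a small combinational circuit, not any particular simulator's interface:

```python
# A minimal sketch of the serial fault-simulation loop. The netlist
# format and fault-injection mechanism here are assumptions made for
# illustration only.

GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "NOT": lambda a: 1 - a,
}

def simulate(netlist, pattern, forced=None):
    """Evaluate a topologically ordered netlist; `forced` pins a node
    to a fixed value, modeling an injected fault."""
    values = dict(pattern)
    if forced:
        values.update(forced)
    for node, (gate, ins) in netlist.items():
        if forced and node in forced:
            continue  # a faulted node keeps its forced value
        values[node] = GATES[gate](*[values[i] for i in ins])
    return values

# c = a AND b, y = NOT c -- listed in topological (evaluation) order
netlist = {"c": ("AND", ["a", "b"]), "y": ("NOT", ["c"])}
pattern = {"a": 1, "b": 1}

good = simulate(netlist, pattern)                    # the "good" machine
faulty = simulate(netlist, pattern, forced={"c": 0})  # one injected fault
print(good["y"], faulty["y"])  # 0 1 -> the fault is observable at y
```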
Faults that cause an output pin (or pins) to deviate from the value provided by the good machine at the time of sampling are said to be observable: their effect can be detected by the test pattern in question. Note, however, that since X represents an intermediate or unknown logical value, a fault that causes an output to change from a known value to X (or vice versa) may not be detectable in practice. Such faults are said to be non-deterministically observable.
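The three-way distinction between undetected, observable, and non-deterministically observable outcomes can be made concrete with a small classifier over three-valued (0/1/X) outputs. The value encoding below is an assumption for illustration:

```python
# A sketch of classifying one sampled output pair under 0/1/X logic.

X = "X"  # intermediate or unknown logical value

def classify(good, faulty):
    """Compare a good-machine output with the faulty-machine output."""
    if good == faulty:
        return "undetected"
    if X in (good, faulty):
        # A 0/1-vs-X mismatch: the real silicon value might still agree
        # with the good machine, so detection is not guaranteed.
        return "non-deterministically observable"
    return "observable"  # a hard 0-vs-1 mismatch at sampling time

for pair in [(0, 0), (0, 1), (1, X), (X, 0)]:
    print(pair, "->", classify(*pair))
```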
Although faults on actual chips are caused by physical defects, such as breaks or shorts between the fabrication layers, the most common fault model used by fault simulators is the ``stuck-at'' fault. This model represents a faulty circuit by sticking (or forcing) a fixed value onto one of the circuit's internal nodes. Thus, to test whether a fault at a particular node is observable, the simulator simulates the circuit twice: once with the node stuck-at-0, and once with the node stuck-at-1. Since testing every node in the circuit in this fashion would be extremely time-consuming, most fault simulators restrict their analysis to a statistically representative subset of nodes. The process of selecting which nodes are to be tested is termed fault seeding.
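Putting the pieces together, a stuck-at campaign over a seeded subset of nodes might look like the sketch below, which reuses the netlist, pattern, and simulate() helper from the earlier sketch; random selection is just one illustrative seeding policy:

```python
# A sketch of stuck-at fault simulation with fault seeding. Assumes the
# simulate(), netlist, and pattern definitions from the sketch above.

import random

def stuck_at_faults(nodes):
    """Two faults per seeded node: stuck-at-0 and stuck-at-1."""
    return [(n, v) for n in nodes for v in (0, 1)]

def fault_coverage(netlist, pattern, outputs, seeded_nodes):
    """Fraction of seeded stuck-at faults the pattern detects."""
    good = simulate(netlist, pattern)
    faults = stuck_at_faults(seeded_nodes)
    detected = 0
    for node, stuck in faults:
        faulty = simulate(netlist, pattern, forced={node: stuck})
        if any(faulty[o] != good[o] for o in outputs):
            detected += 1
    return detected / len(faults)

# Fault seeding: analyze only a sample of the internal nodes rather
# than exhaustively testing every node in the circuit.
seeded = random.sample(list(netlist), k=min(2, len(netlist)))
print("coverage on seeded faults:",
      fault_coverage(netlist, pattern, ["y"], seeded))
```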